9 Point estimation

9.1 Rationale behind point estimation

When sampling from a population described by a pdf $f(x \mid \theta)$ or probability function $P[X = x \mid \theta]$, knowledge of $\theta$ gives knowledge of the entire population. Hence it is natural to seek a method of finding a good estimator of $\theta$. Often the parameter $\theta$ has a meaningful physical interpretation, so there is a direct interest in obtaining a good estimate of $\theta$. It may also be the case that some function of $\theta$, say $\tau(\theta)$, is of interest.

Definition 9.1 A point estimator is any function $T(X_1, \ldots, X_n)$ of a random sample. We often write an estimator of the parameter $\theta$ as $\hat\theta$.

An estimator of $\theta$ is a function of random variables, so it is itself a random variable. The value of the estimator at a realisation $x_1, \ldots, x_n$ of the random sample, that is $T(x_1, \ldots, x_n)$, is a real number and is called an estimate.

9.2 Properties of good estimators

There are a number of properties of estimators which may be desirable.

1. Unbiasedness. If $E[\hat\theta] = \theta$ we say the estimator is unbiased. This means that the distribution of the random variable $\hat\theta$ is centred about the true value $\theta$. The bias of an estimator is defined as $\mathrm{Bias}(\hat\theta) = E[\hat\theta] - \theta$.

2. Small variance. The spread of the estimator about its mean should be small:
\[ \mathrm{Var}[\hat\theta] = E\big[(\hat\theta - E(\hat\theta))^2\big]. \]
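For illustration, here is a minimal simulation sketch (assuming Python with NumPy; the population values $\mu = 5$, $\sigma = 2$ and the sample size are arbitrary choices, not from the notes) checking empirically that the sample mean is unbiased for $\mu$ with variance $\sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 30, 100_000

# Draw `reps` samples of size n and compute the sample mean of each.
xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)

print(xbar.mean())       # close to mu: unbiasedness, E[Xbar] = mu
print(xbar.var())        # close to sigma^2 / n
print(sigma**2 / n)
```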
3. Small mean square error. If the estimator $\hat\theta$ is biased then it makes more sense to want the spread about $\theta$ to be small. This is measured by the mean square error,
\[ \mathrm{MSE}(\hat\theta) = E\big[(\hat\theta - \theta)^2\big]. \]

Theorem 9.1 The mean square error can be expressed as the sum of the variance of the estimator and the square of the bias,
\[ \mathrm{MSE}(\hat\theta) = \mathrm{Var}[\hat\theta] + \big(\mathrm{Bias}(\hat\theta)\big)^2. \]

The identity is easily obtained:
\begin{align*}
\mathrm{MSE}(\hat\theta) &= E\big[(\hat\theta - \theta)^2\big] \\
&= E\big[(\hat\theta - E(\hat\theta) + E(\hat\theta) - \theta)^2\big] \\
&= E\big[(\hat\theta - E(\hat\theta))^2\big] + E\big[(E(\hat\theta) - \theta)^2\big] + 2E\big[(\hat\theta - E(\hat\theta))(E(\hat\theta) - \theta)\big] \\
&= E\big[(\hat\theta - E(\hat\theta))^2\big] + (E(\hat\theta) - \theta)^2 \\
&= \mathrm{Var}[\hat\theta] + \big(\mathrm{Bias}(\hat\theta)\big)^2,
\end{align*}
since the cross-product term is zero: $E(\hat\theta) - \theta$ is a constant and $E[\hat\theta - E(\hat\theta)] = 0$.

4. Consistency. Consistency is a large-sample property, since it describes the limiting behaviour of the estimator $\hat\theta$ as the sample size tends to infinity.

Definition 9.2 If $\hat\theta_n$ is an estimator of $\theta$ based on a random sample of size $n$, we say that $\hat\theta_n$ is consistent for $\theta$ if, for every $\epsilon > 0$,
\[ \lim_{n \to \infty} P\big(|\hat\theta_n - \theta| < \epsilon\big) = 1. \]

A sufficient condition for consistency is that $\lim_{n \to \infty} \mathrm{MSE}(\hat\theta_n) = 0$.
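The decomposition in Theorem 9.1 is easy to check by Monte Carlo. The following sketch (again assuming NumPy; the example estimator is the biased variance estimator $\hat\sigma^2 = \frac{1}{n}\sum_i (X_i - \bar X)^2$, which reappears in the method-of-moments section, and all parameter values are arbitrary) estimates MSE, variance and bias by simulation and confirms that MSE $\approx$ Var $+$ Bias$^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, reps = 0.0, 4.0, 10, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
sigma2_hat = x.var(axis=1)        # divisor n (ddof=0), so this estimator is biased

mse  = np.mean((sigma2_hat - sigma2) ** 2)
bias = sigma2_hat.mean() - sigma2
var  = sigma2_hat.var()

print(mse, var + bias**2)         # the two numbers agree up to Monte Carlo error
```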
The MSE is an important criterion for comparing two estimators. Let $\hat\theta_1$ and $\hat\theta_2$ be two estimators of the parameter $\theta$. Then the relative efficiency of $\hat\theta_1$ to $\hat\theta_2$ is defined as
\[ \mathrm{Eff} = \frac{\mathrm{MSE}(\hat\theta_2)}{\mathrm{MSE}(\hat\theta_1)}. \]
If $\mathrm{Eff} < 1$ we would conclude that $\hat\theta_2$ is a better estimator than $\hat\theta_1$. Note that if $\hat\theta_1$ and $\hat\theta_2$ are unbiased then
\[ \mathrm{Eff} = \frac{\mathrm{Var}(\hat\theta_2)}{\mathrm{Var}(\hat\theta_1)}. \]

Example Let $X_1, \ldots, X_n$ be a random sample from a normal population with mean $\mu$ and variance $\sigma^2$. One obvious estimator of the mean $\mu$ is the sample mean $\bar X$. We know that $E[\bar X] = \mu$ and $\mathrm{Var}[\bar X] = \sigma^2/n$. Consider the alternative estimator $\hat\theta = X_1$. We know that $E[X_1] = \mu$ and $\mathrm{Var}[X_1] = \sigma^2$. Thus the efficiency of the estimator $X_1$ relative to the sample mean is
\[ \mathrm{Eff} = \frac{\mathrm{MSE}(\bar X)}{\mathrm{MSE}(X_1)} = \frac{\sigma^2/n}{\sigma^2} = \frac{1}{n}. \]
Hence the sample mean is a better estimator than a single observation. Note that since $\mathrm{MSE}(\bar X) = \sigma^2/n \to 0$ as $n \to \infty$, we see that $\bar X$ is a consistent estimator of $\mu$.
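A quick simulation check of this efficiency calculation (a sketch with arbitrarily chosen $\mu$, $\sigma$ and $n = 25$, so the theoretical ratio is $1/n = 0.04$):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 5.0, 2.0, 25, 100_000

x = rng.normal(mu, sigma, size=(reps, n))
mse_xbar = np.mean((x.mean(axis=1) - mu) ** 2)   # ~ sigma^2 / n
mse_x1   = np.mean((x[:, 0]        - mu) ** 2)   # ~ sigma^2

print(mse_xbar / mse_x1)                         # ~ 1/n = 0.04
```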
9.3 Methods of estimation

Method of moments

Suppose that $X$ is a continuous rv with pdf $f(x \mid \theta_1, \ldots, \theta_k)$ or a discrete rv with probability function $P(X = x \mid \theta_1, \ldots, \theta_k)$, characterised by $k$ unknown parameters. Let $X_1, \ldots, X_n$ be a random sample from $X$. The first $k$ sample moments about the origin are defined as
\[ m_t = \frac{1}{n} \sum_{i=1}^n X_i^t, \qquad t = 1, 2, \ldots, k. \]
The first $k$ population moments about the origin are
\[ \mu_t = E[X^t] = \int x^t f(x \mid \theta_1, \ldots, \theta_k)\,dx \]
if $X$ is continuous, or
\[ \mu_t = E[X^t] = \sum_j x_j^t\, P(X = x_j \mid \theta_1, \ldots, \theta_k) \]
if $X$ is discrete. The population moments are, in general, functions of the $k$ unknown parameters $\theta_t$. Equating sample moments and population moments yields $k$ simultaneous equations in the $k$ unknowns $\theta_t$, $t = 1, \ldots, k$, that is
\[ \mu_t = m_t, \qquad t = 1, \ldots, k. \]
The solution of the system, denoted by $\hat\theta_1, \ldots, \hat\theta_k$, gives the moment estimators of the parameters $\theta_1, \ldots, \theta_k$.

Example Let $X \sim N(\mu, \sigma^2)$ where $\mu$ and $\sigma^2$ are unknown. Let $X_1, \ldots, X_n$ be a random sample from $X$. The population moments are
\[ \mu_1 = E[X] = \mu, \qquad \mu_2 = E[X^2] = \sigma^2 + \mu^2. \]
The sample moments are
\[ m_1 = \frac{1}{n} \sum_{i=1}^n X_i, \qquad m_2 = \frac{1}{n} \sum_{i=1}^n X_i^2. \]
Hence we have the system of two equations
\[ \mu = \frac{1}{n} \sum_{i=1}^n X_i = \bar X, \qquad \sigma^2 + \mu^2 = \frac{1}{n} \sum_{i=1}^n X_i^2, \]
which has the solution
\[ \hat\mu = \bar X, \qquad \hat\sigma^2 = \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar X^2 = \frac{1}{n} \Big( \sum_{i=1}^n X_i^2 - n \bar X^2 \Big) = \frac{1}{n} \sum_{i=1}^n (X_i - \bar X)^2. \]
So the moment estimator of $\mu$ is the sample mean, and the moment estimator of the variance is the second sample moment about the sample mean. It is not the sample variance (and hence it is biased).
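These moment estimators are one line of code each. A minimal sketch (the helper name `normal_moment_estimates` and the parameter values are our own, not from the notes):

```python
import numpy as np

def normal_moment_estimates(x):
    """Method-of-moments estimates (mu_hat, sigma2_hat) for a N(mu, sigma^2) sample."""
    m1 = x.mean()                # first sample moment
    m2 = np.mean(x ** 2)         # second sample moment
    return m1, m2 - m1 ** 2      # mu_hat = m1, sigma2_hat = m2 - m1^2

rng = np.random.default_rng(3)
x = rng.normal(10.0, 3.0, size=1000)
mu_hat, sigma2_hat = normal_moment_estimates(x)
print(mu_hat, sigma2_hat)                       # close to 10 and 9
print(np.allclose(sigma2_hat, x.var(ddof=0)))   # True: equals the divisor-n variance
```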
Example Let $X_1, \ldots, X_n$ be a random sample from a population having a uniform distribution on the interval $(a, b)$, where $a$ and $b$ are unknown. Use the method of moments to find estimators of $a$ and $b$. Let the following values be a realisation of a random sample of size 10:
2.3, 4.2, 5.3, 5.7, 8.1, 2.8, 6.2, 4.4, 8.5, 3.5.
Calculate the moment estimates of $a$ and $b$ based on these data. Do the estimates have reasonable values?

Let $X_i \sim U(a, b)$, $i = 1, \ldots, n$. The moment equations are
\[ \bar X = E(X) = \mu, \qquad \frac{1}{n} \sum_{i=1}^n X_i^2 = E(X^2) = \sigma^2 + \mu^2, \]
that is,
\[ \mu = \bar X, \qquad \sigma^2 = \frac{1}{n} \sum_{i=1}^n X_i^2 - \mu^2. \]
Also, for a uniform distribution $U(a, b)$ we have $E(X) = \frac{a+b}{2}$ and $\mathrm{var}(X) = \frac{(b-a)^2}{12}$. Hence
\[ \frac{a+b}{2} = \bar X, \qquad \frac{(b-a)^2}{12} = \frac{1}{n} \sum_{i=1}^n X_i^2 - \mu^2, \]
so that
\[ a + b = 2\bar X, \qquad b - a = \sqrt{12 \Big( \frac{1}{n} \sum_{i=1}^n X_i^2 - \mu^2 \Big)}. \]
Hence $a = 2\bar X - b$, and substituting into the second equation gives
\[ 2b = 2\bar X + \sqrt{12 \Big( \frac{1}{n} \sum_{i=1}^n X_i^2 - \mu^2 \Big)}. \]
Finally,
\[ \hat a = \bar X - \sqrt{3 \Big( \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar X^2 \Big)}, \qquad \hat b = \bar X + \sqrt{3 \Big( \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar X^2 \Big)}. \]
For the given data $\bar X = 5.1$ and $\frac{1}{n}\sum_i X_i^2 - \bar X^2 \approx 3.92$, so the estimates are
\[ \hat a = 5.1 - \sqrt{3 \times 3.92} \approx 1.67, \qquad \hat b = 5.1 + \sqrt{3 \times 3.92} \approx 8.53. \]
The estimates seem to be reasonable, as $\hat a < \min_i x_i = 2.3$ and $\hat b > \max_i x_i = 8.5$.
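The same computation in code, reproducing the estimates above (the function name `uniform_moment_estimates` is our own labelling):

```python
import numpy as np

def uniform_moment_estimates(x):
    """Method-of-moments estimates of (a, b) for a U(a, b) sample."""
    m1 = x.mean()
    s2 = np.mean(x ** 2) - m1 ** 2        # divisor-n sample variance
    half_width = np.sqrt(3.0 * s2)        # since var = (b - a)^2 / 12
    return m1 - half_width, m1 + half_width

x = np.array([2.3, 4.2, 5.3, 5.7, 8.1, 2.8, 6.2, 4.4, 8.5, 3.5])
a_hat, b_hat = uniform_moment_estimates(x)
print(round(a_hat, 2), round(b_hat, 2))   # 1.67 8.53
```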
Method of maximum likelihood

Suppose that $X$ is a continuous random variable with pdf $f(x \mid \theta_1, \ldots, \theta_k)$ or a discrete rv with probability function $P(X = x \mid \theta_1, \ldots, \theta_k)$. We make the following definition.

Definition 9.3 Let $X_1, \ldots, X_n$ be a random sample from $X$ and let $x_1, \ldots, x_n$ be the observed values of $X_1, \ldots, X_n$. The function of the parameters $\theta = (\theta_1, \ldots, \theta_k)$ defined by
\[ L(\theta \mid x_1, \ldots, x_n) = f(x_1, \ldots, x_n \mid \theta) \]
if $X$ is continuous, or
\[ L(\theta \mid x_1, \ldots, x_n) = P(X_1 = x_1, \ldots, X_n = x_n \mid \theta) \]
if $X$ is discrete, is called the likelihood function.

The distinction between the likelihood function and the pdf (or probability function) is that the former is a function of the parameters $\theta$ while the latter is a function of the variable $x$.
Example Let $X$ have an exponential distribution with pdf $f(x \mid \theta) = \theta \exp(-\theta x)$. Then the likelihood function is
\begin{align*}
L(\theta \mid x_1, \ldots, x_n) &= f(x_1, \ldots, x_n \mid \theta) \\
&= f(x_1 \mid \theta) \cdots f(x_n \mid \theta) \qquad \text{by independence} \\
&= \theta \exp(-\theta x_1) \cdots \theta \exp(-\theta x_n) \\
&= \theta^n \exp\Big( -\theta \sum_i x_i \Big).
\end{align*}

Now consider two values of $\theta$: $\theta_1$ and $\theta_2$. If $L(\theta_1 \mid x_1, \ldots, x_n) > L(\theta_2 \mid x_1, \ldots, x_n)$ then the sample we have actually observed is more likely to have occurred if $\theta = \theta_1$ than if $\theta = \theta_2$. This can be interpreted as saying that $\theta_1$ is a more plausible value for the true value of $\theta$ than is $\theta_2$. Thus it seems natural to find the value of $\theta$ at which the function $L(\theta \mid x_1, \ldots, x_n)$ attains its maximum.

Definition 9.4 Let $x_1, \ldots, x_n$ be a realisation of a random sample $X_1, \ldots, X_n$ from $X$. We call $x_1, \ldots, x_n$ a sample point from $X$.

Definition 9.5 For each sample point $x_1, \ldots, x_n$ let $\hat\theta$ be a parameter value at which $L(\theta \mid x_1, \ldots, x_n)$ attains its maximum. We call $\hat\theta$ a maximum likelihood estimator (MLE) of the parameter $\theta$.

Note the definition holds whether $\theta$ is a single unknown parameter or a vector of $k$ unknown parameters. If the likelihood function is differentiable in the $\theta_i$, possible candidates for the MLE are the values of $\theta = (\theta_1, \ldots, \theta_k)$ that solve
\[ \frac{\partial}{\partial \theta_i} L(\theta \mid x_1, \ldots, x_n) = 0, \qquad i = 1, \ldots, k. \]
Note that the zeros of the first derivative only locate extreme points in the interior of the domain of the function; if the extremum occurs on the boundary the first derivative need not be zero. Note also that we should check that the second derivatives are negative, so that we do have a maximum.

The likelihood function is a product of marginal pdfs (or probability functions), so it is often useful to consider the logarithm of the likelihood function. This is called the log-likelihood function.
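To see the plausibility idea concretely, the following sketch (assuming NumPy; the true rate 0.5 and the sample size are arbitrary choices) evaluates the exponential log-likelihood at a few values of $\theta$. It is largest near $1/\bar x$, anticipating the derivation that follows:

```python
import numpy as np

def exp_loglik(theta, x):
    """Log-likelihood of an Exp(theta) sample: n*log(theta) - theta*sum(x)."""
    return len(x) * np.log(theta) - theta * x.sum()

rng = np.random.default_rng(4)
x = rng.exponential(scale=1 / 0.5, size=50)   # NumPy's scale is 1/rate; true theta = 0.5

for theta in (0.3, 0.5, 1.0, 1 / x.mean()):
    print(theta, exp_loglik(theta, x))        # largest at theta = 1/xbar, the MLE
```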
Example (continued) For the exponential distribution the log-likelihood is
\[ \ln L(\theta \mid x_1, \ldots, x_n) = \ln\Big[\theta^n \exp\Big(-\theta \sum_i x_i\Big)\Big] = n \ln\theta - \theta \sum_i x_i. \]
The first derivative with respect to $\theta$ is
\[ \frac{d \ln L}{d\theta} = \frac{n}{\theta} - \sum_i x_i; \]
setting this equal to zero we see that
\[ \hat\theta = \frac{n}{\sum_i x_i} = \frac{1}{\bar x}. \]
The second derivative with respect to $\theta$ is
\[ \frac{d^2 \ln L}{d\theta^2} = -\frac{n}{\theta^2} < 0 \]
for all $\theta$. So $\hat\theta = 1/\bar x$ maximises $\ln L$ and so maximises $L$. Thus $\hat\theta = 1/\bar x$ is the maximum likelihood estimator of $\theta$.
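As a sanity check, a numerical maximisation of the same log-likelihood recovers the closed form. This is a sketch assuming SciPy's `minimize_scalar` (the true rate, sample size and bounds are arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = rng.exponential(scale=1 / 2.0, size=200)    # true rate theta = 2

# Minimise the negative log-likelihood over a bounded interval.
neg_loglik = lambda t: -(len(x) * np.log(t) - t * x.sum())
res = minimize_scalar(neg_loglik, bounds=(1e-6, 100.0), method="bounded")

print(res.x, 1 / x.mean())   # numerical maximiser agrees with 1/xbar
```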
Example A binomial experiment consisting of $n$ trials resulted in observations $y_1, \ldots, y_n$, where $y_i = 1$ if the $i$th trial was a success and $y_i = 0$ otherwise. Find the maximum likelihood estimator of $p$, the probability of a success.

The likelihood of the observed sample is
\[ L(p \mid y_1, \ldots, y_n) = p^y (1-p)^{n-y}, \qquad \text{where } y = \sum_i y_i. \]
The log-likelihood is
\[ \ln L(p \mid y_1, \ldots, y_n) = y \ln p + (n - y) \ln(1 - p). \]
Differentiating with respect to $p$ we have
\[ \frac{d \ln L}{dp} = \frac{y}{p} - \frac{n-y}{1-p}. \]
The value of $p$ that satisfies $d \ln L / dp = 0$ is the solution of
\[ \frac{y}{\hat p} = \frac{n-y}{1-\hat p}. \]
Solving, we see that $\hat p = y/n$. Note that
\[ \frac{d^2 \ln L}{dp^2} = -\frac{y}{p^2} - \frac{n-y}{(1-p)^2} < 0, \]
so our solution is indeed a maximum.

Example Let $X \sim N(\mu, 1)$ and let $x_1, \ldots, x_n$ be a sample point from $X$. Find the MLE of $\mu$. The likelihood is
\[ L(\mu \mid x_1, \ldots, x_n) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi}} \exp\Big\{ -\frac{1}{2} (x_i - \mu)^2 \Big\} = \frac{1}{(2\pi)^{n/2}} \exp\Big\{ -\frac{1}{2} \sum_{i=1}^n (x_i - \mu)^2 \Big\}, \]
and the log-likelihood is
\[ \ln L(\mu \mid x_1, \ldots, x_n) = -\frac{n}{2} \ln(2\pi) - \frac{1}{2} \sum_{i=1}^n (x_i - \mu)^2. \]
The first derivative is
\[ \frac{d \ln L}{d\mu} = \sum_{i=1}^n (x_i - \mu) = \sum_{i=1}^n x_i - n\mu. \]
Setting this equal to zero we have
\[ \hat\mu = \frac{1}{n} \sum_{i=1}^n x_i. \]
The second derivative is
\[ \frac{d^2 \ln L}{d\mu^2} = -n < 0, \]
and hence $\hat\mu = \bar X$ is the maximum likelihood estimator.
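A grid-search sketch for the Bernoulli case (the true $p = 0.3$, the number of trials and the grid resolution are arbitrary) confirms that the log-likelihood peaks at $\hat p = y/n$:

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.binomial(1, 0.3, size=40)             # n Bernoulli trials, true p = 0.3

# Log-likelihood y*log(p) + (n - y)*log(1 - p), evaluated on a grid of p values.
loglik = lambda p: y.sum() * np.log(p) + (len(y) - y.sum()) * np.log(1 - p)

grid = np.linspace(0.01, 0.99, 9801)
print(grid[np.argmax(loglik(grid))])          # grid maximiser
print(y.sum() / len(y))                       # closed-form MLE y/n
```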
Example Let $X_1, \ldots, X_n$ be a random sample from a uniform distribution on the interval $(0, \theta)$. The pdf of $X$ is
\[ f(x \mid \theta) = \frac{1}{\theta}, \qquad 0 < x < \theta. \]
Find the maximum likelihood estimator of $\theta$. The likelihood is given by
\[ L(\theta \mid x_1, \ldots, x_n) = \prod_{i=1}^n \frac{1}{\theta} = \frac{1}{\theta^n}. \]
Note that $L$ is a monotonically decreasing function of $\theta$, and hence nowhere in the interval $0 < \theta < \infty$ is $dL/d\theta$ equal to zero. However, note that $L$ increases as $\theta$ decreases, and that $\theta$ must be equal to or greater than the maximum observation in the set $x_1, x_2, \ldots, x_n$. Hence the value of $\theta$ that maximises $L$ is the largest observation in the sample. That is,
\[ \hat\theta = X_{(n)} = \max(X_1, \ldots, X_n). \]

The following theorem tells us how to find the MLE of any function of a parameter for which we know the MLE.

Theorem 9.2 (Invariance property of the MLE) If $\hat\theta$ is the MLE of $\theta$, then for any function $g(\theta)$ the MLE of $g(\theta)$ is $g(\hat\theta)$.

Example The variance of an exponential distribution having parameter $\theta$ is $1/\theta^2$. Since the MLE of $\theta$ is $1/\bar X$, the MLE of the variance is $\bar X^2$.

We note the following without proof; maximum likelihood estimators and their properties will be discussed in more detail in Statistical Inference.

- Under some general conditions MLEs are consistent estimators: both their bias and variance tend to zero as $n$ tends to infinity.
- For large samples MLEs have minimum variance: no other consistent estimator has effectively smaller variance.
- For large samples the distribution of an MLE is approximately normal.
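Finally, a simulation sketch illustrating the last two remarks for the exponential MLE $\hat\theta = 1/\bar X$ (the true $\theta$, sample size and replication count are arbitrary): across many replicate samples the estimator concentrates around $\theta$ and its sampling distribution looks approximately normal.

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 2.0, 200, 50_000

# MLE of the exponential rate for each replicate sample: theta_hat = 1/xbar.
x = rng.exponential(scale=1 / theta, size=(reps, n))
theta_hat = 1 / x.mean(axis=1)

print(theta_hat.mean())           # close to theta; the small bias shrinks as n grows
print(theta_hat.std())            # spread shrinks like 1/sqrt(n)

# Standardised estimates behave like N(0, 1) draws for large n:
z = (theta_hat - theta_hat.mean()) / theta_hat.std()
print(np.mean(np.abs(z) < 1.96))  # ~ 0.95, consistent with approximate normality
```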
More information